
    A Hierarchical Neural Network Implementation for Forecasting

    In this paper, a hierarchical neural network architecture for forecasting time series is presented. The architecture is composed of two hierarchical levels using a maximum likelihood competitive learning algorithm. The first level of the system has three experts, each using backpropagation, and a gating network that partitions the input space in order to map input vectors to output vectors. The second level of the hierarchy has an expert using fuzzy ART to produce the correct trend coming from the first level. The experiments show that the resulting network is capable of forecasting changes in the input and identifying the trends correctly.
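    As a rough illustration of the first hierarchical level described above, the sketch below combines three small feed-forward experts through a softmax gating network. The layer sizes, the untrained random weights, and the omission of the backpropagation training loop and of the fuzzy ART second level are assumptions made for a self-contained example, not details taken from the paper.

    ```python
    # Minimal sketch of a first-level mixture-of-experts forecaster: three
    # experts share the input and a softmax gating network weights their
    # outputs. Sizes and initialisation are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_forward(x, w1, b1, w2, b2):
        """One hidden-layer MLP (stand-in for a backprop-trained expert)."""
        h = np.tanh(x @ w1 + b1)
        return h @ w2 + b2

    def make_expert(n_in, n_hidden, n_out):
        return (rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
                rng.normal(0, 0.1, (n_hidden, n_out)), np.zeros(n_out))

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    n_in, n_hidden, n_out, n_experts = 8, 16, 1, 3
    experts = [make_expert(n_in, n_hidden, n_out) for _ in range(n_experts)]
    gate_w = rng.normal(0, 0.1, (n_in, n_experts))   # gating network parameters

    def mixture_forecast(x):
        """Gate-weighted combination of the three experts' forecasts."""
        gates = softmax(x @ gate_w)                                        # (batch, 3)
        outputs = np.stack([mlp_forward(x, *e) for e in experts], axis=1)  # (batch, 3, 1)
        return (gates[..., None] * outputs).sum(axis=1)                    # (batch, 1)

    window = rng.normal(size=(4, n_in))    # hypothetical lagged time-series windows
    print(mixture_forecast(window).shape)  # -> (4, 1)
    ```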

    ClassCut for Unsupervised Class Segmentation

    We propose a novel method for unsupervised class segmentation on a set of images. It alternates between segmenting object instances and learning a class model. The method is based on a segmentation energy defined over all images at the same time, which can be optimized efficiently by techniques used before in interactive segmentation. Over iterations, our method progressively learns a class model by integrating observations over all images. In addition to appearance, this model captures the location and shape of the class with respect to an automatically determined coordinate frame common across images. This frame allows us to build stronger shape and location models, similar to those used in object class detection. Our method is inspired by interactive segmentation methods [1], but it is fully automatic and learns models characteristic of the object class rather than specific to one particular object/image. We experimentally demonstrate on the Caltech4, Caltech101, and Weizmann horses datasets that our method (a) transfers class knowledge across images, which improves results compared to segmenting every image independently; (b) outperforms GrabCut [1] for the task of unsupervised segmentation; (c) offers competitive performance compared to the state of the art in unsupervised segmentation, and in particular outperforms the topic model [2].
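    To make the alternation concrete, here is a toy sketch of the scheme the abstract describes: segment every image given the current class model, then refit the model from all segmentations jointly. The pixel-intensity foreground/background model, the planted square objects, and the convergence test are illustrative assumptions, not the paper's energy or features.

    ```python
    # Toy alternation: segment each image with the current class model,
    # then re-estimate the model from all current segmentations.
    import numpy as np

    rng = np.random.default_rng(1)

    def segment(image, fg_mean, bg_mean):
        """Label a pixel foreground if it is closer to the foreground model."""
        return (np.abs(image - fg_mean) < np.abs(image - bg_mean)).astype(int)

    def refit(images, masks):
        """Re-estimate foreground/background means from all images jointly."""
        fg = np.concatenate([im[m == 1] for im, m in zip(images, masks)])
        bg = np.concatenate([im[m == 0] for im, m in zip(images, masks)])
        return fg.mean(), bg.mean()

    # Hypothetical image set: bright square "objects" on dark backgrounds.
    images = [np.clip(rng.normal(0.2, 0.1, (32, 32)), 0, 1) for _ in range(5)]
    for im in images:
        im[8:24, 8:24] += 0.5

    fg_mean, bg_mean = 0.9, 0.1          # crude initial class model
    for _ in range(10):
        masks = [segment(im, fg_mean, bg_mean) for im in images]
        new_fg, new_bg = refit(images, masks)
        if abs(new_fg - fg_mean) < 1e-6 and abs(new_bg - bg_mean) < 1e-6:
            break
        fg_mean, bg_mean = new_fg, new_bg

    print("foreground model:", round(fg_mean, 3), "background model:", round(bg_mean, 3))
    ```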

    Reducing Occupational Distress in Veterinary Medicine Personnel with Acceptance and Commitment Training: A Pilot Study

    Aims To determine whether an educational programme targeting the reaction of veterinary personnel to difficult client interactions reduced burden transfer, stress and burnout in veterinary staff. Methods Employees of three small-animal veterinary hospitals in the south-western United States of America were recruited and randomised to intervention (educational programme; n = 16) or control (no intervention; n = 18) groups. Participants of this randomised, parallel-arms trial completed pre-programme assessment including the Burden Transfer Inventory (BTI), Perceived Stress Scale, and Copenhagen Burnout Inventory. Assessment was followed by two group-format educational sessions, based on acceptance and commitment training, tailored to reducing reactivity to difficult veterinary client interactions (intervention group only). After training was completed, both groups were assessed using the same measures and the intervention participants provided use and acceptability ratings. Results Intervention participants rated the programme as useful and appropriate, and reported that programme techniques were used a median of 43 (min 9, max 68) times during the 2 weeks prior to retesting. Relative to pre-programme scores, median post-programme scores for reaction (a subscore of the BTI) to difficult client interactions decreased in the intervention group (33 vs. 54; p = 0.047), but not in the control group (51 vs. 59; p = 0.210). Changes in median scores for stress and burnout from pre- to post-programme were non-significant for both groups. Conclusions This pilot and feasibility trial showed high rates of acceptability and use by participants, as well as promising reductions in burden transfer. A larger-scale clinical trial with follow-up at extended time points is needed to more fully examine the efficacy of this novel programme.
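    For readers who want to see what the within-group pre/post comparison above looks like in code, here is a minimal sketch using a Wilcoxon signed-rank test on paired scores. The numbers are hypothetical placeholders, not the study's data, and the paper's exact statistical test is not specified here, so the choice of test is an assumption.

    ```python
    # Paired pre/post comparison of reaction scores (hypothetical data).
    import numpy as np
    from scipy.stats import wilcoxon

    pre  = np.array([54, 60, 48, 57, 50, 62, 45, 58])   # hypothetical pre-programme scores
    post = np.array([33, 41, 30, 44, 35, 40, 31, 42])   # hypothetical post-programme scores

    stat, p = wilcoxon(pre, post)
    print(f"median pre={np.median(pre):.0f}, median post={np.median(post):.0f}, p={p:.3f}")
    ```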

    On strongly chordal graphs that are not leaf powers

    A common task in phylogenetics is to find an evolutionary tree representing proximity relationships between species. This motivates the notion of leaf powers: a graph G = (V, E) is a leaf power if there exist a tree T on leafset V and a threshold k such that uv is an edge if and only if the distance between u and v in T is at most k. Characterizing leaf powers is a challenging open problem, along with determining the complexity of their recognition. This is in part due to the fact that few graphs are known not to be leaf powers, as such graphs are difficult to construct. Recently, Nevries and Rosenke asked whether leaf powers could be characterized by strong chordality and a finite set of forbidden subgraphs. In this paper, we provide a negative answer to this question by exhibiting an infinite family 𝒢 of (minimal) strongly chordal graphs that are not leaf powers. In the process, we establish a connection between leaf powers, alternating cycles and quartet compatibility. We also show that deciding whether a chordal graph is 𝒢-free is NP-complete, which may provide insight on the complexity of the leaf power recognition problem.
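    The definition quoted above translates directly into a small verification routine: given a candidate tree T whose leaves are the vertices of G and a threshold k, check that adjacency in G coincides with tree distance at most k. The helper name and the networkx-based sketch below are assumptions for illustration; they verify a given (T, k) and do not decide leaf-power recognition.

    ```python
    # Check the leaf-power condition: uv is an edge of G exactly when the
    # distance between leaves u and v in the tree T is at most k.
    from itertools import combinations
    import networkx as nx

    def is_k_leaf_root(G, T, k):
        """Return True if T (a tree whose leaves are V(G)) witnesses G as a k-leaf power."""
        leaves = [v for v in T if T.degree(v) == 1]
        if set(leaves) != set(G.nodes):
            return False
        dist = dict(nx.all_pairs_shortest_path_length(T))
        return all((dist[u][v] <= k) == G.has_edge(u, v)
                   for u, v in combinations(leaves, 2))

    G = nx.complete_graph(["a", "b", "c"])               # K3
    T = nx.Graph([("x", "a"), ("x", "b"), ("x", "c")])   # star: center x, leaves a, b, c
    print(is_k_leaf_root(G, T, 2))   # True: every pair of leaves is at distance 2 <= 2
    print(is_k_leaf_root(G, T, 1))   # False: no pair of leaves is within distance 1
    ```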

    On the Price of Anarchy for flows over time

    Dynamic network flows, or network flows over time, constitute an important model for real-world situations where steady states are unusual, such as urban traffic and the Internet. These applications immediately raise the issue of analyzing dynamic network flows from a game-theoretic perspective. In this paper we study dynamic equilibria in the deterministic fluid queuing model in single-source single-sink networks, arguably the most basic model for flows over time. In the last decade we have witnessed significant developments in the theoretical understanding of the model. However, several fundamental questions remain open. One of the most prominent concerns the Price of Anarchy, measured as the worst-case ratio between the minimum time required to route a given amount of flow from the source to the sink and the time a dynamic equilibrium takes to perform the same task. Our main result states that if we could reduce the inflow of the network in a dynamic equilibrium, then the Price of Anarchy is exactly e/(e − 1) ≈ 1.582. This significantly extends a result by Bhaskar, Fleischer, and Anshelevich (SODA 2011). Furthermore, our methods allow us to determine that the Price of Anarchy in parallel-link networks is exactly 4/3. Finally, we argue that if a certain very natural monotonicity conjecture holds, the Price of Anarchy in the general case is exactly e/(e − 1).
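    As a reminder of the quantity being bounded, the Price of Anarchy here compares the completion time of a dynamic equilibrium with the minimum possible completion time for the same amount of flow. The snippet below only evaluates that ratio and the two closed-form constants quoted in the abstract; the times passed in are hypothetical and the snippet carries none of the paper's analysis.

    ```python
    # Evaluate the Price of Anarchy ratio and the bounds quoted above.
    import math

    def price_of_anarchy(equilibrium_time: float, optimal_time: float) -> float:
        """Ratio of equilibrium completion time to optimal completion time."""
        return equilibrium_time / optimal_time

    print(math.e / (math.e - 1))          # ≈ 1.582, general single-source single-sink bound
    print(4 / 3)                          # ≈ 1.333, parallel-link networks
    print(price_of_anarchy(15.82, 10.0))  # hypothetical times illustrating the ratio
    ```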

    Understanding edge-connectivity in the Internet through core-decomposition

    The Internet is a complex network composed of several networks, the Autonomous Systems, each designed to transport information efficiently. Routing protocols aim to find paths between nodes whenever possible (i.e., the network is not partitioned), or to find paths satisfying specific constraints (e.g., when a certain QoS is required). As connectivity is a measure related to both of these (partitions and selected paths), this work provides a formal lower bound on it based on core-decomposition, valid under certain conditions, together with low-complexity algorithms to compute it. We apply them to analyze maps obtained from the prominent Internet mapping projects, using the LaNet-vi open-source software for their visualization.
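    For readers unfamiliar with core-decomposition, the sketch below computes it with networkx on a toy topology: each node's core number is the largest k for which it survives in the k-core, the maximal subgraph of minimum degree at least k. The toy graph and the final edge-connectivity check are illustrative assumptions; the conditions under which core numbers bound edge-connectivity are established in the paper, not here.

    ```python
    # Core decomposition of a toy graph with networkx.
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 3, seed=42)   # toy scale-free topology
    cores = nx.core_number(G)                       # node -> core number

    k_max = max(cores.values())
    innermost = nx.k_core(G, k_max)                 # densest ("nucleus") subgraph
    print("maximum core number:", k_max)
    print("innermost core size:", innermost.number_of_nodes())
    print("edge connectivity of innermost core:", nx.edge_connectivity(innermost))
    ```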

    Construction and Random Generation of Hypergraphs with Prescribed Degree and Dimension Sequences

    We propose algorithms for the construction and random generation of hypergraphs without loops and with prescribed degree and dimension sequences. The objective is to provide a starting point for, as well as an alternative to, Markov chain Monte Carlo approaches. Our algorithms transpose properties and algorithms devised for zero-one matrices with prescribed row and column sums to hypergraphs. The construction algorithm extends the applicability of Markov chain Monte Carlo approaches when the initial hypergraph is not provided. The random generation algorithm allows the development of a self-normalised importance sampling estimator for hypergraph properties such as the average clustering coefficient. We prove the correctness of the proposed algorithms. We also prove that the random generation algorithm generates any hypergraph following the prescribed degree and dimension sequences with non-zero probability. We empirically and comparatively evaluate the effectiveness and efficiency of the random generation algorithm. Experiments show that it provides stable and accurate estimates of the average clustering coefficient, and also achieves a better effective sample size than the Markov chain Monte Carlo approaches.
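    The self-normalised importance sampling estimator mentioned above has a standard generic form: draw samples from a proposal, weight each by the (unnormalised) target-to-proposal density ratio, and normalise by the total weight. The sketch below shows that generic estimator together with the Kish effective sample size diagnostic on a toy Gaussian example; it is not the paper's hypergraph generator, whose proposal and weights are specific to the prescribed degree and dimension sequences.

    ```python
    # Generic self-normalised importance sampling (SNIS) with an effective
    # sample size diagnostic, demonstrated on a toy Gaussian example.
    import numpy as np

    rng = np.random.default_rng(7)

    def snis_estimate(samples, unnorm_weights, f):
        """Self-normalised importance sampling: sum(w * f(x)) / sum(w)."""
        w = np.asarray(unnorm_weights, dtype=float)
        fx = np.array([f(x) for x in samples])
        return float((w * fx).sum() / w.sum())

    def effective_sample_size(unnorm_weights):
        """Kish effective sample size of the importance weights."""
        w = np.asarray(unnorm_weights, dtype=float)
        return float(w.sum() ** 2 / (w ** 2).sum())

    # Toy example: estimate E[X] under N(1, 1) using samples from N(0, 1).
    x = rng.normal(0.0, 1.0, size=10_000)
    log_w = -0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2   # log target - log proposal (constants cancel)
    w = np.exp(log_w - log_w.max())                # stabilised unnormalised weights

    print("estimate of E[X]:", round(snis_estimate(x, w, lambda v: v), 3))   # ≈ 1.0
    print("effective sample size:", round(effective_sample_size(w), 1))
    ```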